In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that only stochastic information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). Firstly, we propose a general framework of stochastic quasi-Newton methods for solving nonconvex stochastic optimization. The proposed framework extends classic quasi-Newton methods from the deterministic setting to the stochastic setting, and we prove its almost sure convergence to stationary points. Secondly, we propose a general framework for a class of randomized stochastic quasi-Newton methods, in which the number of iterations conducted by the algorithm is a random variable. The worst-case SFO-call complexities of this class of methods are analyzed. Thirdly, we present two specific methods that fall into this framework, namely the stochastic damped-BFGS method and the stochastic cyclic Barzilai-Borwein method. Finally, we report numerical results to demonstrate the efficiency of the proposed methods.
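For concreteness, a generic iteration of the kind of stochastic quasi-Newton method described above can be sketched as follows; the notation here is illustrative rather than the paper's own. Given the current iterate $x_k$, a stochastic gradient $g_k$ returned by the SFO, a positive definite matrix $H_k$ built from past (stochastic) curvature information, and a stepsize $\alpha_k$, the update takes the form
\[
x_{k+1} = x_k - \alpha_k H_k g_k, \qquad \mathbb{E}[g_k \mid x_k] = \nabla f(x_k),
\]
where, roughly speaking, a damped-BFGS variant modifies the usual curvature pair $(s_k, y_k)$ so that $H_k$ remains positive definite even when $s_k^\top y_k$ is not sufficiently positive, while a cyclic Barzilai-Borwein variant replaces $H_k$ by a scalar multiple of the identity that is updated periodically.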